Robust optimization with simulated annealing
Authors
Abstract
Similar resources
Robust optimization with simulated annealing
Complex systems can be optimized to improve their performance with respect to desired functionalities. An optimized solution, however, can become suboptimal or even infeasible when errors in implementation or input data are encountered. We report on a robust simulated annealing algorithm that does not require any knowledge of the problem's structure. This is necessary in many engineering applicat...
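The abstract above describes annealing on a worst-case (robust) objective rather than the nominal one. The following is a minimal sketch of that idea, not the authors' exact algorithm: the robust cost at a point is estimated by sampling perturbations inside an assumed implementation-error ball of radius `gamma`, and a standard Metropolis/cooling loop is run on that estimate. All names and parameters (`robust_cost`, `gamma`, `cooling`, the toy objective) are illustrative assumptions.

```python
import math
import random

def robust_cost(f, x, gamma, n_samples=20):
    """Estimate the worst-case value of f over an implementation-error
    ball of radius gamma around x by sampling random perturbations."""
    worst = f(x)
    for _ in range(n_samples):
        # random direction, rescaled so the perturbation stays inside the ball
        delta = [random.uniform(-1.0, 1.0) for _ in x]
        norm = math.sqrt(sum(d * d for d in delta)) or 1.0
        scale = gamma * random.random() / norm
        worst = max(worst, f([xi + scale * d for xi, d in zip(x, delta)]))
    return worst

def robust_simulated_annealing(f, x0, gamma=0.1, t0=1.0, cooling=0.95,
                               steps=2000, step_size=0.2):
    """Anneal on the estimated worst-case cost instead of the nominal cost."""
    x, cost = list(x0), robust_cost(f, x0, gamma)
    t = t0
    for _ in range(steps):
        cand = [xi + random.gauss(0.0, step_size) for xi in x]
        cand_cost = robust_cost(f, cand, gamma)
        # Metropolis acceptance applied to the robust (worst-case) objective
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / t):
            x, cost = cand, cand_cost
        t *= cooling
    return x, cost

# Toy objective: the nominal minimum near x = 0 is sharp and fragile,
# while the basin near x = 2 is shallow but wide, hence more robust.
f = lambda x: min((x[0] - 2.0) ** 2, 5.0 * abs(x[0]) - 0.5)
print(robust_simulated_annealing(f, [3.0], gamma=0.3))
```

In this toy example the nominal optimum near x = 0 looks best, but its worst case within the error radius is poor, so the robust search settles in the wider basin near x = 2.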
Portfolio optimization with simulated annealing algorithm
The Markowitz optimization problem cannot be solved by exact mathematical methods such as quadratic programming when real-world conditions and limitations are considered. On the other hand, most managers prefer to manage a small portfolio of the available assets rather than a huge one. This can be modeled with cardinality constraints, that is, constraints on the minimum and maximum curr...
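As a hedged illustration of how simulated annealing can handle such cardinality constraints (this is not the method of the paper above), the sketch below anneals over subsets of exactly `k` equally weighted assets with a penalized mean-variance objective; the names (`portfolio_sa`, `risk_aversion`) and the asset-swap move are assumptions made for the example.

```python
import math
import random

def portfolio_sa(mu, cov, k, risk_aversion=3.0, steps=5000, t0=1.0, cooling=0.999):
    """Toy SA for a cardinality-constrained Markowitz portfolio: hold exactly
    k assets, equally weighted (the weights themselves could also be annealed,
    but are fixed here to keep the sketch short)."""
    n = len(mu)

    def cost(assets):
        w = 1.0 / k
        ret = sum(mu[i] * w for i in assets)
        var = sum(cov[i][j] * w * w for i in assets for j in assets)
        return risk_aversion * var - ret   # minimize risk minus expected return

    current = set(random.sample(range(n), k))
    cur_cost = cost(current)
    best, best_cost = set(current), cur_cost
    t = t0
    for _ in range(steps):
        # neighbour move: swap one held asset for one that is not held
        out_asset = random.choice(sorted(current))
        in_asset = random.choice([i for i in range(n) if i not in current])
        cand = (current - {out_asset}) | {in_asset}
        cand_cost = cost(cand)
        if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
            current, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = set(current), cur_cost
        t *= cooling
    return sorted(best), best_cost
```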
Optimization by simulated annealing
There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very ...
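The connection this abstract draws from statistical mechanics enters the algorithm through the Metropolis acceptance rule: uphill moves are accepted with a probability controlled by a temperature that is gradually lowered. A minimal sketch of that rule follows; the geometric cooling schedule and parameter names are assumptions, not details from the paper.

```python
import math
import random

def metropolis_accept(delta_e, temperature):
    """Accept a candidate move that changes the cost ('energy') by delta_e:
    improvements are always accepted; uphill moves are accepted with
    probability exp(-delta_e / T), which shrinks as the temperature drops."""
    return delta_e <= 0 or random.random() < math.exp(-delta_e / temperature)

# A common (assumed) geometric cooling schedule: T_k = alpha**k * T_0.
def geometric_schedule(t0=1.0, alpha=0.95):
    t = t0
    while True:
        yield t
        t *= alpha
```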
A new Simulated Annealing algorithm for the robust coloring problem
The Robust Coloring Problem (RCP) is a generalization of the well-known Graph Coloring Problem in which we seek a solution that remains valid when extra edges are added. The RCP is used in scheduling events subject to possible last-minute changes and in studying frequency assignments in the electromagnetic spectrum. The problem has been proven NP-hard, and for instances larger than 30 vertices, meta-...
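To make the connection to simulated annealing concrete (this is a generic sketch, not the algorithm proposed in the paper above), one common formulation colors the vertices, penalizes conflicts on real edges heavily, and sums user-given penalties over complementary edges whose endpoints share a color. The `hard_weight` constant and the single-vertex recoloring move are assumptions of this sketch.

```python
import math
import random

def rcp_simulated_annealing(n, edges, penalties, k, steps=20000,
                            t0=2.0, cooling=0.9995, hard_weight=1000.0):
    """Generic SA for the Robust Coloring Problem: n vertices, colors 0..k-1.
    `edges` must be properly colored (enforced with a large penalty);
    `penalties` maps complementary edges (u, v) to the cost incurred when
    u and v end up with the same color, which is what the RCP minimizes."""

    def cost(coloring):
        c = hard_weight * sum(coloring[u] == coloring[v] for u, v in edges)
        c += sum(p for (u, v), p in penalties.items() if coloring[u] == coloring[v])
        return c

    coloring = [random.randrange(k) for _ in range(n)]
    cur = cost(coloring)
    best, best_cost, t = list(coloring), cur, t0
    for _ in range(steps):
        v = random.randrange(n)          # neighbour move: recolor one vertex
        old_color = coloring[v]
        coloring[v] = random.randrange(k)
        cand = cost(coloring)
        if cand < cur or random.random() < math.exp((cur - cand) / t):
            cur = cand
            if cur < best_cost:
                best, best_cost = list(coloring), cur
        else:
            coloring[v] = old_color      # rejected: undo the move
        t *= cooling
    return best, best_cost
```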
Simulated Annealing and Global Optimization
Nelder-Mead (when you don't know ∇f) and steepest descent/conjugate gradient (when you do). Both of these methods are based on attempting to generate a sequence of positions x_k with monotonically decreasing f(x_k), in the hope that x_k → x*, the global minimum of f. If f is a convex function (this happens surprisingly often) and has only one local minimum, these methods are exactly the ri...
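The point being made is that monotone-descent methods only reach the local minimum of whichever basin they start in, which is why a global method such as simulated annealing is needed for non-convex f. A tiny illustration; the objective, step size, and starting points are assumptions chosen for the example.

```python
def steepest_descent(grad_f, x0, lr=0.05, steps=500):
    """Plain gradient descent: every step moves downhill, so f(x_k) decreases
    monotonically and the iterates settle into whichever local minimum the
    starting point's basin of attraction leads to."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad_f(x)
    return x

# Assumed non-convex example: f(x) = x**4 - 3*x**2 + x has two local minima,
# only one of which is global.
grad = lambda x: 4 * x**3 - 6 * x + 1
print(steepest_descent(grad, x0=1.5))    # converges to the local minimum near x ≈ 1.1
print(steepest_descent(grad, x0=-1.5))   # converges to the global minimum near x ≈ -1.3
```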
Journal
Journal title: Journal of Global Optimization
Year: 2009
ISSN: 0925-5001, 1573-2916
DOI: 10.1007/s10898-009-9496-x